Your Friend Asked You a Question. Don't Copy and Paste an Answer From a Chatbot
Your friend came to you because they respect your knowledge and opinion, and outsourcing the answer to a machine is lazy and rude. Back in the 2010s, a website called Let Me Google That For You gained notable popularity for serving a single purpose: snark. The site lets you generate a custom link to send to somebody who asks you a question; when they click it, it plays an animation of the question being typed into Google.
- North America > United States > Missouri > Jackson County > Kansas City (0.05)
- North America > United States > Tennessee (0.05)
- North America > United States > Kansas > Wyandotte County > Kansas City (0.05)
- (4 more...)
- Leisure & Entertainment (0.48)
- Media (0.31)
- Information Technology > Communications (0.97)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.52)
A Lean Dataset for International Math Olympiad: Small Steps towards Writing Math Proofs for Hard Problems
Yousefzadeh, Roozbeh, Cao, Xuenan
Using AI to write formal proofs for mathematical problems is a challenging task that has seen some advancements in recent years. Automated systems such as Lean can verify the correctness of proofs written in formal language, yet writing the proofs in formal language can be challenging for humans and machines. The miniF2F benchmark has 20 IMO problems in its testing set, yet formal proofs are available for only 7 of these problems (3 of which were written only by mathematicians). The model with the best accuracy can prove only 4 of these 20 IMO problems, all from the 1950s and '60s, while its training set is secret. In this work, we write complete, original formal proofs for the remaining 13 IMO problems in Lean, along with 3 extra problems from IMO 2022 and 2023. This effort expands the availability of proofs currently in the public domain by creating 5,150 lines of Lean proof. The goal of the paper is to pave the way for developing AI models that can automatically write the formal proofs for all the IMO problems in miniF2F and beyond. In this pursuit, we devise a method to decompose the proof of these problems into their building blocks, constructing a dataset of about 900 lemmas with 25,500 lines of Lean code. These lemmas are not trivial, yet they are approachable, providing the opportunity to evaluate and diagnose the failures and successes of AI models. We then evaluate the ability of GPT-4 to write formal proofs for these lemmas with zero-shot prompting, CoT reasoning, and lemma retrieval. In evaluating the responses, we also analyze the confounding factor of the LLM's ability to write the proofs in natural language vs. in Lean.
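To give a flavor of what a small, approachable lemma looks like in Lean with Mathlib, here is an illustrative toy example (not one of the paper's ~900 lemmas): a one-line proof assembled from two library lemmas, which is exactly the granularity at which model failures become easy to diagnose.

```lean
import Mathlib

-- Sum of squares is nonnegative: each square is nonnegative
-- (sq_nonneg), and nonnegativity is closed under addition (add_nonneg).
theorem sq_add_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 :=
  add_nonneg (sq_nonneg a) (sq_nonneg b)
```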
Graph-Convolutional Autoencoder Ensembles for the Humanities, Illustrated with a Study of the American Slave Trade
We introduce a graph-aware autoencoder ensemble framework, with associated formalisms and tooling, designed to facilitate deep learning for scholarship in the humanities. By composing sub-architectures to produce a model isomorphic to a humanistic domain we maintain interpretability while providing function signatures for each sub-architectural choice, allowing both traditional and computational researchers to collaborate without disrupting established practices. We illustrate a practical application of our approach to a historical study of the American post-Atlantic slave trade, and make several specific technical contributions: a novel hybrid graph-convolutional autoencoder mechanism, batching policies for common graph topologies, and masking techniques for particular use-cases. The effectiveness of the framework for broadening participation of diverse domains is demonstrated by a growing suite of two dozen studies, both collaborations with humanists and established tasks from machine learning literature, spanning a variety of fields and data modalities. We make performance comparisons of several different architectural choices and conclude with an ambitious list of imminent next steps for this research.
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.05)
- North America > United States > Texas (0.04)
- North America > United States > Maryland > Baltimore (0.04)
- (3 more...)
Study shows AI program could verify Wikipedia citations, improving reliability
You can't trust everything on a Wikipedia page, which is why it's important to refer to the original sources cited in the footnotes. But sometimes even the primary sources can lead you astray. Researchers have developed an AI system aimed at improving the reliability of Wikipedia references by training algorithms to identify questionable citations on the site. The program, called SIDE, does two things: it checks whether a primary source is accurate, and it suggests new ones. However, the AI operates under the assumption that a Wikipedia claim is true. This means that, while it can check the validity of a source, it can't actually verify the claims made in an entry.
- North America > United States (0.06)
- Asia > Middle East > Israel (0.06)
Google's New A.I. Search Engine Should Leave the Media Very Worried
This article is from Big Technology, a newsletter by Alex Kantrowitz. At Google's I/O developer conference this week, the company showed an experimental version of its search engine handling an almost unimaginably difficult query. Asked whether a family with kids under three years old and a dog would prefer Arches National Park or Bryce Canyon, Google scoured the internet and returned a lengthy, detailed answer. It noted that while only Bryce had paths that allowed dogs, kids might love the rock formations at Arches, and that Arches still had plenty of dog-friendly campgrounds, pullouts, and roads. "Now, search does the heavy lifting for you," said Google Search VP Cathy Edwards.
- Information Technology > Information Management > Search (1.00)
- Information Technology > Communications (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Information Retrieval (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.34)
Causal DAG extraction from a library of books or videos/movies
Determining a causal DAG (directed acyclic graph) for a problem under consideration is a major roadblock when doing Judea Pearl's Causal Inference (CI) in Statistics. The same problem arises when doing CI in Artificial Intelligence (AI) and Machine Learning (ML). As with many problems in Science, we think Nature has found an effective solution to this problem. We argue that human and animal brains contain an explicit engine for doing CI, and that such an engine uses as input an atlas (i.e., collection) of causal DAGs. We propose a simple algorithm for constructing such an atlas from a library of books or videos/movies. We illustrate our method by applying it to a database of randomly generated Tic-Tac-Toe games. The software used to generate this Tic-Tac-Toe example is open source and available on GitHub.
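Whatever the atlas-construction algorithm does internally (the authors' code is on GitHub), every graph stored in such an atlas must satisfy the defining DAG property: no directed cycles. A minimal sketch of that check using Kahn's topological sort — an illustrative standard algorithm, not the paper's implementation:

```python
from collections import defaultdict, deque

def is_acyclic(edges):
    """Kahn's algorithm: True iff the directed graph given as a list
    of (parent, child) edges contains no directed cycle."""
    indeg = defaultdict(int)
    adj = defaultdict(list)
    nodes = set()
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    # Start from all sources (in-degree zero) and peel them off.
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Every node was peeled iff no cycle blocked the process.
    return seen == len(nodes)
```

A causal chain such as move → board state → outcome passes the check; any feedback edge would fail it.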
Why Do Interviewers Ask Linked List Questions? • Hillel Wayne
A couple years back I gave a talk on researching software history, using "linked list interview questions" as an example topic. Since referring people to a video is less accessible than just writing a blog post, I've reproduced the question here. So why do interviewers like to ask linked list questions? These answers are contradictory: if you want to know whether someone knows CS fundamentals, you don't want to give them a problem they can trick their way through, and if you want to test reasoning ability, you don't want to give a problem they've already seen in CS. Two contradictory answers tell me there's some history involved. My guess is that originally people asked LL questions for a very good reason, and then over time forgot the reason and came up with post-hoc justifications.
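For readers who haven't met the genre: the canonical linked list interview question is "reverse a singly linked list in place." A minimal Python sketch of the kind of answer interviewers expect:

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse the list in place by flipping one pointer per step;
    returns the new head (the old tail)."""
    prev = None
    while head:
        # Tuple assignment evaluates the right side first, so the
        # old head.next is saved before we overwrite the pointer.
        head.next, prev, head = prev, head, head.next
    return prev

def to_list(head):
    """Collect node values into a Python list for inspection."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out
```

The whole puzzle is pointer bookkeeping, which is exactly why it tests "fundamentals" and memorization in equal measure.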
Artificial Intelligence Software Market : Segments, Leading Player, Application & Forecast Analysis 2027 – Bulletin Line
The Artificial Intelligence Software Market has witnessed continuous growth in the past few years and is projected to grow further during the forecast period (2020-2027). The research presents a complete assessment of the market, containing future trends, current growth factors, expert opinions, facts, and historical data, along with statistically supported and industry-validated market figures. The report also provides an overall analysis of the market's share, size, segmentation, revenue forecasts, and geographic regions; industry-leading players are studied with respect to their company profiles, product portfolios, capacity, prices, costs, and revenue. It further provides a detailed analysis of current applications, a comparative analysis focused on the pros and cons of Artificial Intelligence Software, and a competitive analysis of major companies. Key players in the Artificial Intelligence Software market have been identified through secondary research, and their market shares have been determined through primary and secondary research.
Active Multi-Information Source Bayesian Quadrature
Gessner, Alexandra, Gonzalez, Javier, Mahsereci, Maren
Bayesian quadrature (BQ) is a sample-efficient probabilistic numerical method to solve integrals of expensive-to-evaluate black-box functions, yet so far, active BQ learning schemes focus merely on the integrand itself as an information source, and do not allow for information transfer from cheaper, related functions. Here, we set the scene for active learning in BQ when multiple related information sources of variable cost (in input and source) are accessible. This setting arises, for example, when evaluating the integrand requires a complex simulation to be run that can be approximated by simulating at lower levels of sophistication and at lesser expense. We construct meaningful cost-sensitive multi-source acquisition rates as an extension to common utility functions from vanilla BQ (VBQ), and discuss pitfalls that arise from blindly generalizing. Furthermore, we show that the VBQ acquisition policy is a corner case of all considered cost-sensitive acquisition schemes, which collapse onto one single degenerate policy in the case of one source and constant cost. In proof-of-concept experiments we scrutinize the behavior of our generalized acquisition functions. On an epidemiological model, we demonstrate that active multi-source BQ (AMS-BQ) allocates budget more efficiently than VBQ for learning the integral to a good accuracy.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- (3 more...)
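For the single-source corner case the abstract calls vanilla BQ, the posterior-mean integral estimate has a closed form when the kernel is an RBF and the integration measure is a standard Gaussian. A minimal sketch of that estimate (our own illustration, not the authors' code; the node placement, length-scale, and jitter value are arbitrary choices):

```python
import numpy as np

def bq_estimate(nodes, f_vals, ell=1.0):
    """Vanilla Bayesian quadrature estimate of I = ∫ f(x) N(x|0,1) dx.

    With an RBF kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2)), the
    kernel mean z_i = ∫ k(x, x_i) N(x|0,1) dx is available in closed
    form, and the GP posterior-mean estimate is z^T K^{-1} f.
    """
    X = np.asarray(nodes, dtype=float)
    # Gram matrix of the RBF kernel at the evaluation nodes.
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell**2)
    K += 1e-8 * np.eye(len(X))  # jitter for numerical conditioning
    # Closed-form kernel mean against the N(0,1) measure.
    z = ell / np.sqrt(ell**2 + 1.0) * np.exp(-0.5 * X**2 / (ell**2 + 1.0))
    return float(z @ np.linalg.solve(K, np.asarray(f_vals, dtype=float)))
```

With f(x) = x² and nodes covering the bulk of the Gaussian mass, the estimate lands close to the true value E[X²] = 1 using only a handful of evaluations, which is the sample efficiency the abstract refers to.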
Artificial Intelligence in Medicine Market Is Booming Worldwide
The report starts with an introduction to the company profiling and a comprehensive review of the strategy concept and the tools that can be used to assess and analyze strategy. Porter's Five Forces model is a powerful tool that combines five competitive forces which limit any industry's profit according to external factors. These forces are the threat of new entrants, the bargaining power of customers, the bargaining power of suppliers, the threat of substitution by an alternative product or service, and the intensity of competition among current rivals within the industry. This market is expected to reach XXX billion by the end of the forecast period, with a CAGR of XX.X%. Future trends are also introduced in the report, which elaborates key factors of the Global Artificial Intelligence in Medicine market such as market opportunities, future market risk, benefit, loss and profit, customer perspective, innovation, and short-term vs. long-term outlook.
- North America > United States > New Jersey > Middlesex County > Edison (0.05)
- Asia > Southeast Asia (0.05)
- Asia > Japan (0.05)
- (2 more...)
- Research Report (0.50)
- Overview (0.36)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.53)
- Information Technology > Hardware (0.42)